1 |
Towards Unsupervised Content Disentanglement in Sentence Representations via Syntactic Roles
In: CtrlGen: Controllable Generative Modeling in Language and Vision, Jan 2022, virtual, France ; https://hal.inria.fr/hal-03540084 (2022)

2 |
Can Character-based Language Models Improve Downstream Task Performance in Low-Resource and Noisy Language Scenarios?
In: Seventh Workshop on Noisy User-generated Text (W-NUT 2021, colocated with EMNLP 2021), Jan 2022, Punta Cana, Dominican Republic ; https://hal.inria.fr/hal-03527328 ; https://aclanthology.org/2021.wnut-1.47/ (2022)

3 |
First Align, then Predict: Understanding the Cross-Lingual Ability of Multilingual BERT
In: https://hal.inria.fr/hal-03161685 (2021)

4 |
Can Multilingual Language Models Transfer to an Unseen Dialect? A Case Study on North African Arabizi
In: https://hal.inria.fr/hal-03161677 (2021)

5 |
First Align, then Predict: Understanding the Cross-Lingual Ability of Multilingual BERT
In: EACL 2021 - The 16th Conference of the European Chapter of the Association for Computational Linguistics, Apr 2021, Kyiv / Virtual, Ukraine ; https://hal.inria.fr/hal-03239087 ; https://2021.eacl.org/ (2021)

6 |
When Being Unseen from mBERT is just the Beginning: Handling New Languages With Multilingual Language Models
In: NAACL-HLT 2021 - 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Jun 2021, Mexico City, Mexico ; https://hal.inria.fr/hal-03251105 (2021)

7 |
PAGnol: An Extra-Large French Generative Model
In: [Research Report] LightOn, 2021 ; https://hal.inria.fr/hal-03540159 (2021)

8 |
Synthetic Data Augmentation for Zero-Shot Cross-Lingual Question Answering
In: https://hal.inria.fr/hal-03109187 (2021)

9 |
Noisy UGC Translation at the Character Level: Revisiting Open-Vocabulary Capabilities and Robustness of Char-Based Models
In: W-NUT 2021 - 7th Workshop on Noisy User-generated Text (colocated with EMNLP 2021), Association for Computational Linguistics, Nov 2021, Punta Cana, Dominican Republic ; https://hal.inria.fr/hal-03540174 (2021)

10 |
Understanding the Impact of UGC Specificities on Translation Quality
In: W-NUT 2021 - Seventh Workshop on Noisy User-generated Text (colocated with EMNLP 2021), Association for Computational Linguistics, Nov 2021, Punta Cana, Dominican Republic ; https://hal.inria.fr/hal-03540175 (2021)

11 |
Challenging the Semi-Supervised VAE Framework for Text Classification
In: Second Workshop on Insights from Negative Results in NLP (colocated with EMNLP), Nov 2021, Punta Cana, Dominican Republic ; https://hal.inria.fr/hal-03540081 ; https://insights-workshop.github.io/2021/ (2021)

13 |
IWPT 2021 Shared Task Data and System Outputs

Abstract:
This package contains the data used in the IWPT 2021 shared task: training, development, and test (evaluation) datasets. The data is based on a subset of Universal Dependencies release 2.7 (http://hdl.handle.net/11234/1-3424), but some treebanks contain additional enhanced annotations. Moreover, not all of these additions became part of Universal Dependencies release 2.8 (http://hdl.handle.net/11234/1-3687), which makes the shared task data unique and worth a separate release to enable later comparison with new parsing algorithms. The package also contains a number of Perl and Python scripts used to process the data during preparation and during the shared task. Finally, the package includes the official primary submission of each team participating in the shared task.

Keywords:
dependency; enhanced universal dependencies; parsing; shared task; syntax; treebank

URL: http://hdl.handle.net/11234/1-3728
